As a successful approach to self-supervised learning, contrastive learning aims to learn invariant information shared among distortions of the input sample. While contrastive learning has yielded continuous advances in sampling strategy and architecture design, two persistent defects remain: the interference of task-irrelevant information and sample inefficiency, both related to the recurring emergence of trivial constant solutions. From the perspective of dimensional analysis, we find that dimensional redundancy and dimensional confounders are the intrinsic issues behind these phenomena, and we provide experimental evidence to support our viewpoint. We further propose a simple yet effective approach, MetaMask, short for the dimensional Mask learned by Meta-learning, to learn representations that counter dimensional redundancy and confounders. MetaMask adopts a redundancy-reduction technique to tackle the dimensional redundancy issue and innovatively introduces a dimensional mask to reduce the gradient effects of the specific dimensions containing the confounder; the mask is trained with a meta-learning paradigm whose objective is to improve the performance of the masked representations on a typical self-supervised task. We provide solid theoretical analyses proving that MetaMask achieves tighter risk bounds for downstream classification than typical contrastive methods. Empirically, our method achieves state-of-the-art performance on various benchmarks.
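To make the mechanism concrete, here is a minimal sketch of a meta-learned dimensional mask, assuming PyTorch >= 2.0: the mask weights each embedding dimension, the encoder takes an inner update on the masked features, and the mask is then updated through the adapted encoder's self-supervised loss. The bilevel scheme, step sizes, and names (`encoder`, `ssl_loss`) are illustrative assumptions, not the paper's exact implementation.

```python
# A sketch of a meta-learned dimensional mask; assumptions noted above.
import torch
from torch.func import functional_call

d = 256                                           # embedding size (assumed)
mask_logits = torch.zeros(d, requires_grad=True)  # meta-learned mask parameters

def masked(z):
    # Sigmoid keeps each per-dimension weight in [0, 1].
    return z * torch.sigmoid(mask_logits)

def mask_meta_step(encoder, view1, view2, ssl_loss, lr_inner=1e-2, lr_meta=1e-3):
    names, params = zip(*encoder.named_parameters())
    # Inner step: the encoder is trained on mask-weighted features, so
    # dimensions suspected of carrying the confounder get damped gradients.
    inner_loss = ssl_loss(masked(encoder(view1)), masked(encoder(view2)))
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    adapted = {n: p - lr_inner * g for n, p, g in zip(names, params, grads)}
    # Outer step: the mask is scored by how well the adapted encoder does on
    # the plain self-supervised objective, and updated through that path.
    meta_loss = ssl_loss(functional_call(encoder, adapted, (view1,)),
                         functional_call(encoder, adapted, (view2,)))
    g_mask, = torch.autograd.grad(meta_loss, mask_logits)
    with torch.no_grad():
        mask_logits.sub_(lr_meta * g_mask)
```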
While self-supervised learning techniques are often used to mine implicit knowledge from unlabeled data by modeling multiple views, it remains unclear how to perform effective representation learning in a complex and inconsistent context. To this end, we propose a method, namely the consistency and complementarity network (CoCoNet), which leverages strict global inter-view consistency and local cross-view complementarity-preserving regularization to comprehensively learn representations from multiple views. At the global stage, we reason that key knowledge is implicitly shared among views, and that enhancing the encoder to capture such knowledge from the data can improve the discriminability of the learned representations. Hence, preserving the global consistency of multiple views ensures the acquisition of common knowledge. CoCoNet aligns the probabilistic distributions of views by utilizing an efficient discrepancy metric based on the generalized sliced Wasserstein distance. Lastly, at the local stage, we propose a heuristic complementarity factor that integrates cross-view discriminative knowledge and guides the encoders to learn not only view-wise discriminability but also cross-view complementary information. Theoretically, we provide information-theoretic analyses of our proposed CoCoNet. Empirically, to investigate the improvement of our method, we conduct adequate experimental validation, which demonstrates that CoCoNet outperforms state-of-the-art self-supervised methods, proving that such implicit consistency and complementarity-preserving regularization can enhance the discriminability of latent representations.
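For intuition on the alignment step, below is a minimal sliced Wasserstein discrepancy between two batches of view embeddings; CoCoNet uses a generalized sliced variant, so this plain random-projection version is a simplified stand-in, not the paper's exact measure.

```python
# Simplified sliced Wasserstein discrepancy between two views (PyTorch).
import torch

def sliced_wasserstein(x, y, n_proj=128):
    """x, y: (batch, dim) embeddings of two views of the same data."""
    d = x.size(1)
    # Random projection directions on the unit sphere.
    theta = torch.randn(d, n_proj, device=x.device)
    theta = theta / theta.norm(dim=0, keepdim=True)
    # Project both batches onto each direction (1-D marginals).
    px, py = x @ theta, y @ theta
    # In 1-D, optimal transport reduces to matching sorted samples.
    px, _ = px.sort(dim=0)
    py, _ = py.sort(dim=0)
    return ((px - py) ** 2).mean()
```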
Few-shot learning models learn representations with limited human annotations, and such a learning paradigm has demonstrated its practicability in various tasks; however, the limited training data prevent the model from sufficiently exploring semantic information. To address this, we introduce knowledge distillation into the few-shot object detection learning paradigm. We further run a motivating experiment, which demonstrates that, in the process of knowledge distillation, the empirical error of the teacher model degrades the prediction performance of the few-shot object detection model serving as the student. To understand the reasons behind this phenomenon, we revisit the learning paradigm of knowledge distillation on the few-shot object detection task from the standpoint of causal theory and, accordingly, develop a Structural Causal Model. Following this theoretical guidance, we propose a backdoor adjustment-based knowledge distillation method for the few-shot object detection task, namely Disentangle and Remerge (D&R), to perform conditional causal intervention on the corresponding Structural Causal Model. Theoretically, we provide an extended definition for the backdoor criterion, i.e., the general backdoor path, which can expand the theoretical application boundary of the backdoor criterion in specific cases. Empirically, experiments on multiple benchmark datasets demonstrate that D&R yields significant performance gains in few-shot object detection.
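For reference, the textbook backdoor adjustment that such conditional interventions build on is

$$
P\bigl(Y \mid \mathrm{do}(X = x)\bigr) \;=\; \sum_{z} P\bigl(Y \mid X = x,\, Z = z\bigr)\, P(Z = z),
$$

where Z is a set of variables satisfying the backdoor criterion relative to (X, Y); the paper's general backdoor path extends this criterion and is not reproduced here.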
Prevailing graph neural network models have achieved significant progress in graph representation learning. However, in this paper we uncover an ever-overlooked phenomenon: a pre-trained graph representation learning model tested with full graphs underperforms the same model tested with well-pruned graphs. This observation reveals that there exist confounders in graphs, which may interfere with the model's learning of semantic information, and current graph representation learning methods have not eliminated their influence. To tackle this issue, we propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects. RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders and thereby capture discriminative information that is causally related to downstream predictions. We offer theorems and proofs to guarantee the theoretical effectiveness of the proposed approach. Empirically, we conduct extensive experiments on a synthetic dataset and multiple benchmark datasets. The results demonstrate that, compared with state-of-the-art methods, RCGRL achieves better prediction performance and generalization ability.
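As background, instrumental-variable estimation is typically posed through moment restrictions; a standard unconditional form (textbook notation, not necessarily RCGRL's exact formulation) is

$$
\mathbb{E}\bigl[\, g(Z)\,\bigl(Y - f(X)\bigr) \,\bigr] = 0,
$$

where Z is the instrument, f is the causal response function of interest, and g ranges over a chosen family of test functions. A valid instrument influences the outcome only through the treatment, which is what lets it screen out confounders.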
Contrastive learning (CL)-based self-supervised models learn visual representations in a pairwise manner. Although prevailing CL models have achieved great progress, in this paper we uncover an ever-overlooked phenomenon: when a CL model is trained with full images, the performance tested on full images is better than that on foreground areas; when a CL model is trained with foreground areas, the performance tested on full images is worse than that on foreground areas. This observation reveals that backgrounds in images may interfere with the model's learning of semantic information, and that their influence has not been fully eliminated. To tackle this issue, we build a Structural Causal Model (SCM) that models the background as a confounder. We propose a backdoor adjustment-based regularization method, namely Interventional Contrastive Learning with Meta Semantic Regularizer (ICL-MSR), to perform causal intervention on the proposed SCM. ICL-MSR can be incorporated into any existing CL method to alleviate background distractions in representation learning. Theoretically, we prove that ICL-MSR achieves a tighter error bound. Empirically, our experiments on multiple benchmark datasets demonstrate that ICL-MSR improves the performance of different state-of-the-art CL methods.
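Because ICL-MSR is described as a plug-in for any CL method, its integration reduces to an added penalty; the sketch below shows only that wiring, with `msr_reg` left as a placeholder for the regularizer the paper derives from backdoor adjustment (its actual form is not reproduced here).

```python
# Schematic plug-in, assuming PyTorch. `base_cl_loss` stands for whatever
# contrastive objective is already in use (SimCLR, MoCo, ...); `msr_reg`
# is a placeholder for the paper's meta semantic regularizer.
def icl_msr_step(encoder, view1, view2, base_cl_loss, msr_reg, lam=0.1):
    z1, z2 = encoder(view1), encoder(view2)
    # Original objective plus the interventional regularization term.
    return base_cl_loss(z1, z2) + lam * msr_reg(z1, z2)
```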
What matters for contrastive learning? We argue that contrastive learning heavily relies on informative features, or "hard" (positive or negative) features. Early works obtain more informative features by applying complex data augmentations and large batch sizes or memory banks, and recent works design elaborate sampling approaches to explore informative features. The key challenge in exploring such features is that the source multi-view data are generated by applying random data augmentations, making it infeasible to always add useful information to the augmented data. Consequently, the informativeness of features learned from such augmented data is limited. In response, we propose to directly augment the features in latent space, thereby learning discriminative representations without a large amount of input data. We employ a meta-learning technique to build the augmentation generator, which updates its network parameters by considering the performance of the encoder. However, insufficient input data may lead the encoder to learn collapsed features and thus cause the augmentation generator to malfunction. A new margin-injected regularization is further added to the objective function to prevent the encoder from learning a degenerate mapping. To contrast all features in one gradient back-propagation step, we adopt the proposed optimization-driven unified contrastive loss instead of the conventional contrastive loss. Empirically, our method achieves state-of-the-art results on several benchmark datasets.
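To illustrate the idea of augmenting in latent space, here is a minimal residual augmentation generator, assuming PyTorch; the architecture is an assumption, and the meta-learning update that ties the generator to encoder performance is omitted for brevity.

```python
# A latent-space augmentation generator sketch (PyTorch); the generated
# features would serve as extra positives/negatives in the contrastive loss.
import torch.nn as nn

class LatentAugmenter(nn.Module):
    def __init__(self, dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, z):
        # Produce an augmented feature as a residual perturbation of z,
        # keeping the augmentation anchored to the original sample.
        return z + self.net(z)
```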
Recent works explore learning graph representations in a self-supervised manner. In graph contrastive learning, benchmark methods apply various graph augmentation approaches. However, most of these augmentation methods are non-learnable, which can generate unbeneficial augmented graphs. Such augmentation may degrade the representation ability of graph contrastive learning methods. Therefore, we motivate our method to generate augmented graphs with a learnable graph augmenter, called MetA Graph Augmentation (MEGA). We then clarify that a "good" graph augmentation must have uniformity at the instance level and informativeness at the feature level. To this end, we propose a novel approach to learning a graph augmenter that generates augmentations with both uniformity and informativeness. The objective of the graph augmenter is to promote our feature extraction network to learn a more discriminative feature representation, which motivates us to propose a meta-learning paradigm. Empirically, experiments across multiple benchmark datasets demonstrate that MEGA outperforms state-of-the-art methods on graph self-supervised learning tasks. Further experimental studies prove the effectiveness of the different terms of MEGA.
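As a sketch of what a learnable graph augmenter can look like, the module below scores each edge and samples a differentiable keep-mask via a Gumbel-Sigmoid relaxation; this is an illustrative stand-in, assuming PyTorch, not MEGA's actual augmenter or its meta-learning update.

```python
# Learnable edge-level augmenter sketch (PyTorch); assumptions noted above.
import torch
import torch.nn as nn

class EdgeAugmenter(nn.Module):
    def __init__(self, node_dim):
        super().__init__()
        self.scorer = nn.Linear(2 * node_dim, 1)  # scores each edge

    def forward(self, x, edge_index, tau=0.5):
        # x: (num_nodes, node_dim); edge_index: (2, num_edges)
        src, dst = edge_index
        logits = self.scorer(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)
        # Differentiable Bernoulli relaxation: soft keep-mask per edge.
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log1p(-u)
        keep = torch.sigmoid((logits + noise) / tau)
        return keep  # used to reweight/drop edges in the augmented graph
```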
Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical scenarios, thus the analysis of robustness for c-MARL models is profoundly important. However, robustness certification for c-MARLs has not yet been explored in the community. In this paper, we propose a novel certification method, which is the first work to leverage a scalable approach for c-MARLs to determine actions with guaranteed certified bounds. c-MARL certification poses two key challenges compared with single-agent systems: (i) the accumulated uncertainty as the number of agents increases; (ii) the potential lack of impact when changing the action of a single agent into a global team reward. These challenges prevent us from directly using existing algorithms. Hence, we employ the false discovery rate (FDR) controlling procedure considering the importance of each agent to certify per-state robustness and propose a tree-search-based algorithm to find a lower bound of the global reward under the minimal certified perturbation. As our method is general, it can also be applied in single-agent environments. We empirically show that our certification bounds are much tighter than state-of-the-art RL certification solutions. We also run experiments on two popular c-MARL algorithms: QMIX and VDN, in two different environments, with two and four agents. The experimental results show that our method produces meaningful guaranteed robustness for all models and environments. Our tool CertifyCMARL is available at https://github.com/TrustAI/CertifyCMA
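For background on the FDR controlling step, the classical Benjamini-Hochberg procedure is sketched below; the paper's certification additionally weights agents by importance, which this textbook version does not.

```python
# Benjamini-Hochberg FDR control: given p-values, return which hypotheses
# are rejected at FDR level alpha (standard step-up procedure).
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest i with p_(i) <= alpha*i/m
        rejected[order[:k + 1]] = True
    return rejected
```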
Modern autonomous driving system is characterized as modular tasks in sequential order, i.e., perception, prediction and planning. As sensors and hardware get improved, there is trending popularity to devise a system that can perform a wide diversity of tasks to fulfill higher-level intelligence. Contemporary approaches resort to either deploying standalone models for individual tasks, or designing a multi-task paradigm with separate heads. These might suffer from accumulative error or negative transfer effect. Instead, we argue that a favorable algorithm framework should be devised and optimized in pursuit of the ultimate goal, i.e. planning of the self-driving-car. Oriented at this goal, we revisit the key components within perception and prediction. We analyze each module and prioritize the tasks hierarchically, such that all these tasks contribute to planning (the goal). To this end, we introduce Unified Autonomous Driving (UniAD), the first comprehensive framework up-to-date that incorporates full-stack driving tasks in one network. It is exquisitely devised to leverage advantages of each module, and provide complementary feature abstractions for agent interaction from a global perspective. Tasks are communicated with unified query design to facilitate each other toward planning. We instantiate UniAD on the challenging nuScenes benchmark. With extensive ablations, the effectiveness of using such a philosophy is proven to surpass previous state-of-the-arts by a large margin in all aspects. The full suite of codebase and models would be available to facilitate future research in the community.
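Purely as a schematic of the unified-query pattern (module names, shapes, and the chaining shown are assumptions, not UniAD's actual interfaces), each task module can be viewed as a set of learned queries cross-attending to scene features and to upstream task queries:

```python
# Schematic query-based task chaining (PyTorch); not UniAD's code.
import torch
import torch.nn as nn

class TaskModule(nn.Module):
    def __init__(self, d=256, n_queries=100, n_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d))
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)

    def forward(self, context):
        # context: (batch, seq, d), e.g. BEV features concatenated with
        # queries emitted by upstream modules.
        q = self.queries.unsqueeze(0).repeat(context.size(0), 1, 1)
        out, _ = self.attn(q, context, context)
        return out  # task queries, forwarded as context to the next module

# Chaining sketch: downstream modules attend to upstream task queries, e.g.
# track_q = track(bev); motion_q = motion(torch.cat([bev, track_q], dim=1))
```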
Pre-trained language models for programming languages have shown a powerful ability in processing many Software Engineering (SE) tasks, e.g., program synthesis, code completion, and code search. However, it remains to be seen what is behind their success. Recent studies have examined how pre-trained models can effectively learn syntax information based on Abstract Syntax Trees. In this paper, we figure out what role the self-attention mechanism plays in understanding code syntax and semantics based on AST and static analysis. We focus on a well-known representative code model, CodeBERT, and study how it can learn code syntax and semantics by the self-attention mechanism and Masked Language Modelling (MLM) at the token level. We propose a group of probing tasks to analyze CodeBERT. Based on AST and static analysis, we establish the relationships among the code tokens. First, our results show that CodeBERT can acquire syntax and semantics knowledge through self-attention and MLM. Second, we demonstrate that the self-attention mechanism pays more attention to dependence-relationship tokens than to other tokens. Different attention heads play different roles in learning code semantics; we show that some of them are weak at encoding code semantics. Different layers have different competencies to represent different code properties. Deep CodeBERT layers can encode the semantic information that requires some complex inference in the code context. More importantly, we show that our analysis is helpful and leverage our conclusions to improve CodeBERT. We show an alternative approach for pre-training models, which makes full use of the current pre-training strategy, i.e., MLM, to learn code syntax and semantics, instead of combining features from different code data formats, e.g., data-flow, running-time states, and program outputs.
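A minimal sketch of this kind of attention probing, assuming the HuggingFace transformers API; the code snippet, token indices, and the notion of a "related" token pair below are illustrative, not the paper's exact probing setup.

```python
# Extract per-head self-attention from CodeBERT and inspect how much
# attention flows between a chosen pair of code tokens.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base",
                                  output_attentions=True)

code = "def add(a, b): return a + b"
inputs = tok(code, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions: tuple of (batch, heads, seq, seq) tensors, one per layer.
attn = torch.stack(out.attentions)  # (layers, 1, heads, seq, seq)
i, j = 5, 9  # indices of two tokens in a dependence relationship (assumed)
per_head = attn[:, 0, :, i, j]      # attention from token_i to token_j
print(per_head)                      # inspect which layers/heads attend
```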